The growing pervasiveness of machine learning increases the risk that unfair models are deployed in high-stakes applications such as judicial systems, drug/vaccination design, and medical diagnosis. Although there are effective methods to train fair models from scratch, how to automatically reveal and explain the unfairness of an already-trained model remains a challenging task. Revealing the unfairness of machine learning models in an interpretable manner is a critical step toward fair and trustworthy AI. In this paper, we systematically tackle the novel task of revealing unfair models by mining interpretable evidence (RUMIE). The key idea is to find solid evidence in the form of a group of data instances discriminated most by the model. To make the evidence interpretable, we also find a set of human-understandable key attributes and decision rules that characterize the discriminated data instances and distinguish them from the other non-discriminated data. As demonstrated by extensive experiments on many real-world datasets, our method finds highly interpretable and solid evidence that effectively reveals the unfairness of trained models. Moreover, it is more scalable than all of the baseline methods.
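As a rough illustration of the evidence-mining idea, the sketch below scores each instance by how much a binary classifier's prediction shifts when a binary protected attribute is flipped, keeps the top-k most affected instances as the "discriminated" evidence group, and reads human-readable rules off a shallow tree that separates this group from the rest. The flipping-based score, the scikit-learn `predict_proba` interface, and the tree-based rule extraction are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def mine_unfairness_evidence(model, X, protected_col, feature_names, k=100):
    # Score each instance by how much the prediction shifts when the
    # (binary) protected attribute is flipped -- an illustrative proxy
    # for "being discriminated by the model".
    X_flip = X.copy()
    X_flip[:, protected_col] = 1 - X_flip[:, protected_col]
    scores = np.abs(model.predict_proba(X)[:, 1] - model.predict_proba(X_flip)[:, 1])

    # Keep the top-k most affected instances as the evidence group.
    evidence = np.argsort(-scores)[:k]
    labels = np.zeros(len(X), dtype=int)
    labels[evidence] = 1

    # A shallow tree yields human-readable rules that separate the evidence
    # group from the remaining (non-discriminated) data.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
    return evidence, export_text(tree, feature_names=feature_names)
```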
A good empathetic dialogue system should first track and understand a user's emotion and then reply with an appropriate emotion. However, existing approaches to this task either focus on improving the understanding of the user's emotion or on devising better response strategies, and very few works consider both at the same time. Our work attempts to fill this gap. Inspired by task-oriented dialogue systems, we propose a novel empathetic response generation model with emotion-aware dialogue management. The emotion-aware dialogue management contains two parts: (1) emotion state tracking maintains the current emotion state of the user, and (2) empathetic dialogue policy selection predicts a target emotion and the user's intent based on the results of emotion state tracking. The predicted information is then used to guide the generation of responses. Experimental results show that dynamically managing this information helps the model generate more empathetic responses compared with several baselines under both automatic and human evaluations.
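A minimal skeleton of the two-stage dialogue management described above might look as follows; the tracker, policy, and generator components are placeholders standing in for the paper's models, not their actual implementations.

```python
from dataclasses import dataclass

@dataclass
class DialogueState:
    user_emotion: str      # tracked emotion of the user, e.g. "sad"
    user_intent: str       # predicted user intent, e.g. "seeking comfort"
    target_emotion: str    # emotion the reply should convey, e.g. "caring"

def respond(history, tracker, policy, generator):
    # (1) Emotion state tracking: maintain the user's current emotion.
    emotion = tracker(history)
    # (2) Empathetic dialogue policy: predict user intent and target emotion
    #     from the tracked state.
    intent, target = policy(history, emotion)
    state = DialogueState(emotion, intent, target)
    # (3) Generation guided by the predicted information.
    return generator(history, state)
```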
Non-IID data pose a tough challenge for federated learning. In this paper, we explore the novel idea of facilitating pairwise collaboration between clients with similar data. We propose FedAMP, a new method employing federated attentive message passing to encourage similar clients to collaborate more. We establish the convergence of FedAMP for both convex and non-convex models, and propose a heuristic method to further improve the performance of FedAMP when clients adopt deep neural networks as personalized models. Our extensive experiments on benchmark datasets demonstrate the superior performance of the proposed methods.
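A minimal sketch of the attentive-message-passing idea, assuming each client's parameters are flattened into a vector: every client receives a personalized "cloud" model that weights other clients by an exponentially decaying function of parameter distance, so that clients with similar models (and presumably similar data) influence each other more. The kernel, the `self_weight` split, and the function name are illustrative choices, not FedAMP's exact formulation.

```python
import numpy as np

def attentive_aggregate(client_weights, sigma=1.0, self_weight=0.5):
    # client_weights: list of flattened parameter vectors, one per client.
    W = np.stack(client_weights)                       # shape (n_clients, dim)
    personalized = []
    for i in range(len(W)):
        # Similarity decays with parameter distance: clients whose models
        # are closer to client i's get larger attention weights.
        sq_dist = np.sum((W - W[i]) ** 2, axis=1)
        attn = np.exp(-sq_dist / sigma)
        attn[i] = 0.0                                  # handle the self-term separately
        if attn.sum() > 0:
            attn = attn / attn.sum() * (1.0 - self_weight)
        attn[i] = self_weight if len(W) > 1 else 1.0
        personalized.append(attn @ W)                  # client i's personalized cloud model
    return personalized
```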
Data trading is essential to accelerate the development of data-driven machine learning pipelines. The central problem in data trading is to estimate the utility of a seller's dataset with respect to a given buyer's machine learning task, also known as data valuation. Typically, data valuation requires one or more participants to share their raw dataset with others, leading to potential risks of intellectual property (IP) violations. In this paper, we tackle the novel task of preemptively protecting the IP of datasets that need to be shared during data valuation. First, we identify and formalize two kinds of novel IP risks in visual datasets: data-item (image) IP and statistical (dataset) IP. Then, we propose a novel algorithm to convert the raw dataset into a sanitized version that provides resistance to IP violations while at the same time allowing accurate data valuation. The key idea is to limit the transfer of information from the raw dataset to the sanitized dataset, thereby protecting against potential intellectual property violations. Next, we analyze our method for the likely existence of a solution and immunity against reconstruction attacks. Finally, we conduct extensive experiments on three computer vision datasets demonstrating the advantages of our method in comparison to other baselines.
Federated learning (FL) is an effective technique to directly involve edge devices in machine learning training while preserving client privacy. However, the substantial communication overhead of FL makes training challenging when edge devices have limited network bandwidth. Existing work to optimize FL bandwidth overlooks downstream transmission and does not account for FL client sampling. In this paper we propose GlueFL, a framework that incorporates new client sampling and model compression algorithms to mitigate low download bandwidths of FL clients. GlueFL prioritizes recently used clients and bounds the number of changed positions in compression masks in each round. Across three popular FL datasets and three state-of-the-art strategies, GlueFL reduces downstream client bandwidth by 27% on average and reduces training time by 29% on average.
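The two mechanisms can be pictured with the toy sketch below: a "sticky" sampler that reuses part of last round's cohort, and a top-k sparsification mask that may change at most a fixed number of positions per round. The function names and specific heuristics are assumptions for illustration, not GlueFL's actual algorithms.

```python
import random
import numpy as np

def sticky_sample(all_clients, last_round, k, sticky_share=0.5, rng=random):
    # Reuse a share of last round's clients (they already hold most of the
    # current mask), then top up the cohort with fresh clients.
    n_sticky = min(int(k * sticky_share), len(last_round))
    sticky = rng.sample(list(last_round), n_sticky)
    pool = [c for c in all_clients if c not in sticky]
    return sticky + rng.sample(pool, k - n_sticky)

def bounded_topk_mask(update, prev_mask, k_total, max_new):
    # Top-k sparsification mask that changes at most `max_new` positions
    # relative to last round, limiting the extra downstream bytes clients fetch.
    assert max_new <= k_total
    ranked = np.argsort(-np.abs(update))                        # positions by magnitude
    new_first = [p for p in ranked if not prev_mask[p]][:max_new]
    kept_old = [p for p in ranked if prev_mask[p]][:k_total - len(new_first)]
    mask = np.zeros_like(prev_mask)
    mask[new_first + kept_old] = True
    return mask
```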
In the real world, the frequency of occurrence of objects is naturally skewed, forming long-tail class distributions, which results in poor performance on statistically rare classes. A promising solution is to mine tail-class examples to balance the training dataset. However, mining tail-class examples is a very challenging task. For instance, most uncertainty-based mining approaches struggle due to the distortion of class probabilities resulting from the skewness in the data. In this work, we propose an effective, yet simple, approach to overcome these challenges. Our framework enhances the subdued tail-class activations and, thereafter, uses a one-class data-centric approach to effectively identify tail-class examples. We carry out an exhaustive evaluation of our framework on three datasets spanning two computer vision tasks. Substantial improvements in minority-class mining and in the fine-tuned model's performance strongly corroborate the value of our proposed solution.
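The mining step could be sketched roughly as follows: amplify the tail-class dimensions of the extracted features or logits, fit a one-class model on known tail-class examples, and rank an unlabeled pool by its scores. The boosting scheme and the OneClassSVM choice are illustrative stand-ins for the paper's one-class, data-centric approach, not its exact recipe.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def mine_tail_examples(tail_feats, pool_feats, tail_dims, boost=2.0, nu=0.1):
    # tail_dims marks which feature/logit dimensions correspond to tail classes.
    def enhance(feats):
        feats = feats.copy()
        feats[:, tail_dims] *= boost                 # amplify subdued tail activations
        return feats

    # Fit a one-class model on known tail-class examples, then score the pool.
    oc = OneClassSVM(nu=nu, kernel="rbf").fit(enhance(tail_feats))
    scores = oc.decision_function(enhance(pool_feats))
    return np.argsort(-scores)                       # pool indices, most tail-like first
```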
The multi-head self-attention mechanism of transformers has been thoroughly investigated recently. On the one hand, researchers are interested in understanding why and how transformers work. On the other hand, they propose new attention augmentation methods to make transformers more accurate, efficient and interpretable. In this paper, we combine these two lines of research in a human-in-the-loop pipeline that first discovers important task-specific attention patterns. Those patterns are then injected, not only into the original model, but also into smaller models, as a human-guided knowledge distillation process. The benefits of our pipeline are demonstrated in a case study on the extractive summarization task. After finding three meaningful attention patterns in the popular BERTSum model, experiments show that when we inject such patterns, both the original and the smaller models improve in performance and arguably in interpretability.
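One simple way to picture "injecting" a discovered pattern is to blend each head's attention distribution with a fixed, human-identified pattern matrix; the blending scheme below is an assumption for illustration, not the paper's exact mechanism.

```python
import torch

def inject_pattern(attn, pattern, alpha=0.3):
    # attn:    (batch, heads, seq, seq) softmax attention weights
    # pattern: (seq, seq) row-stochastic, human-identified pattern matrix
    mixed = (1.0 - alpha) * attn + alpha * pattern    # broadcast over batch and heads
    return mixed / mixed.sum(dim=-1, keepdim=True)    # keep each row a distribution
```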
The massive deployment of graph neural networks (GNNs) in high-stakes applications creates a strong demand for explanations that are robust to noise and align well with human intuition. Most existing methods generate explanations by identifying a subgraph of the input graph that has a strong correlation with the prediction. These explanations are not robust to noise, because independently optimizing the correlation for a single input can easily overfit noise. Moreover, they do not align well with human intuition, because removing the identified subgraph from the input graph does not necessarily change the prediction result. In this paper, we propose a novel method to generate robust counterfactual explanations for GNNs by explicitly modelling the common decision logic of GNNs on similar input graphs. Our explanations are naturally robust to noise because they are produced from the common decision boundaries of a GNN that govern the predictions of many similar input graphs. The explanations also align well with human intuition because removing the set of edges identified by an explanation from the input graph significantly changes the prediction. Exhaustive experiments on many public datasets demonstrate the superior performance of our method.
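The counterfactual property itself is easy to check: an explanation should flip the prediction once its edges are deleted. A minimal sketch, assuming a graph-level GNN exposed as `model(x, edge_index)` and a boolean mask over the edges; this checks the property the method targets, not the method itself.

```python
import torch

def is_counterfactual(model, x, edge_index, explanation_edges):
    # explanation_edges: boolean mask over the columns of edge_index.
    with torch.no_grad():
        original = model(x, edge_index).argmax(dim=-1)
        reduced = edge_index[:, ~explanation_edges]    # delete the explanation edges
        perturbed = model(x, reduced).argmax(dim=-1)
    return bool((original != perturbed).any())         # True if the prediction flips
```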
Benefiting from the intrinsic supervision information exploitation capability, contrastive learning has achieved promising performance in the field of deep graph clustering recently. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms from further improvement. 1) The quality of positive samples heavily depends on the carefully designed data augmentations, while inappropriate data augmentations would easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) by mining the intrinsic supervision information in the high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function to pull close the samples from the same cluster while pushing away those from other clusters by maximizing and minimizing the cross-view cosine similarity between positive and negative samples. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with the existing state-of-the-art algorithms.
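A toy version of the cluster-guided objective, assuming normalized view embeddings, per-node high-confidence cluster assignments, and cluster centers computed from one view; the exact positive selection and weighting in CCGC may differ from this sketch.

```python
import torch
import torch.nn.functional as F

def cluster_guided_loss(z1, z2, centers, cluster_ids):
    # z1, z2: (n, d) embeddings of the two views; centers: (k, d);
    # cluster_ids: (n,) long tensor of high-confidence cluster assignments.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    c = F.normalize(centers, dim=1)

    # Positive term: cross-view cosine similarity of the same node.
    pos = (z1 * z2).sum(dim=1)

    # Negative term: similarity to the centers of all *other* clusters.
    sims = z1 @ c.t()                                     # (n, k)
    own = F.one_hot(cluster_ids, num_classes=c.size(0)).bool()
    neg = sims.masked_fill(own, 0.0).sum(dim=1) / (c.size(0) - 1)

    # Maximize the positive similarity, minimize the negative similarity.
    return (neg - pos).mean()
```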
As one of the prevalent approaches to building automation systems, Imitation Learning (IL) has shown promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, the corresponding research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model from randomized masked demonstrations and uses the conventional evaluation outcome, environment returns, as the coefficients to build an importance map. We also conducted experiments to investigate three major questions concerning frames' importance equality, the effectiveness of the importance map, and connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
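The importance-map construction echoes RISE-style occlusion analysis; below is a hedged sketch, where `train_il_on` and `evaluate_return` are placeholder callables standing in for the user's own IL training and environment rollout routines.

```python
import numpy as np

def importance_map(demo_len, n_trials, train_il_on, evaluate_return,
                   keep_prob=0.5, seed=0):
    # train_il_on(mask): placeholder that retrains the black-box IL policy on
    #   the demonstration frames kept by the binary mask and returns the policy.
    # evaluate_return(policy): placeholder that rolls out the policy and
    #   returns its environment return.
    rng = np.random.default_rng(seed)
    importance = np.zeros(demo_len)
    coverage = np.zeros(demo_len)
    for _ in range(n_trials):
        mask = (rng.random(demo_len) < keep_prob).astype(float)
        policy = train_il_on(mask)
        ret = evaluate_return(policy)                 # environment return as the coefficient
        importance += ret * mask                      # credit the return to the kept frames
        coverage += mask
    return importance / np.maximum(coverage, 1.0)     # per-frame weighted average return
```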